Part II: Tutorial

Author

Maarten Marsman

Published

August 3, 2025

Workshop Roadmap

You are here:

📘 Part I — Theory ✅
🔧 Part II — Tutorial (this lecture)
🧪 Part III — Assignment (next)

Access workshop material here.

In Part I, we explored how Bayesian reasoning helps us understand whether effects are present or not.

Now in Part II, we’ll put that into practice.

You’ll learn how to run Bayesian network analyses using easybgm and JASP, interpret the output, and understand how priors and uncertainty shape your conclusions.

By the end of this session, you’ll be able to:

  • Fit a network model using Bayesian methods in R or JASP
  • Use Bayes factors to test edges, and interpret effect sizes
  • Visualize and report results clearly
  • Explore the role of prior assumptions in Bayesian models

In Part III, you’ll do all this on your own!

Before we dive into the hands-on part, let’s take a moment to remind ourselves what we covered in Part I and why it matters for what we’re about to do.

Recap of Part I

Two fundamental questions

In Part I, we discussed the two fundamental questions in network analysis.

The focus in Part I was on testing for an effect (i.e., Question 1), but in Part II we also consider estimating the size and sign of an effect (i.e., Question 2).

Distinguishing absence of evidence from evidence of absence

We discussed that network estimation cannot tell us why some effects are absent. But
the Bayesian approach helps distinguish absence of evidence from evidence of absence.

Why is this important? Think about when you’ve seen null effects in published network models. Were they truly absent? Or just underpowered?

From prior weights to posterior probabilities

We use prior probabilities to reflect uncertainty in structure before we see the data, and update them with data using Bayes’ Rule.

What is a posterior probability? It indicates the relative plausibility that a model or structure generated the observed data.
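To make the updating step concrete, here is a toy base-R calculation of Bayes' rule for two hypothetical structures with equal prior weight (the numbers are illustrative, not from the workshop data):

```r
# Two candidate structures with equal prior weight
prior <- c(M1 = 0.5, M0 = 0.5)

# Hypothetical marginal likelihoods P(data | structure)
marglik <- c(M1 = 0.08, M0 = 0.02)

# Bayes' rule: posterior is proportional to prior times marginal likelihood
posterior <- prior * marglik / sum(prior * marglik)
posterior
# M1 = 0.8, M0 = 0.2
```

The posterior of 0.8 says that, among these two structures, M1 is four times as plausible as M0 after seeing the data.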

What are Bayes factors?

Bayes factors compare how well two competing hypotheses could predict the observed data: \[ \text{BF}_{10} = \frac{P(\text{data} \mid \mathcal{H}_1)}{P(\text{data} \mid \mathcal{H}_0)}. \]

  • If \(\text{BF}_{10} > 1\), the data favor \(\mathcal{H}_1\) (e.g., edge is present)
  • If \(\text{BF}_{10} < 1\), the data favor \(\mathcal{H}_0\) (e.g., edge is absent)

How do we use Bayes Factors in practice?

A Bayes factor is a continuous measure of evidence, not just a binary decision. Although the scale is continuous, interpretive categories are often used to summarize the strength of evidence.
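One common labeling scheme (Jeffreys-style cut-offs at 1/10, 1/3, 3, and 10; exact boundaries vary across texts) can be sketched in a few lines of base R:

```r
# Map a Bayes factor BF10 to a rough evidence label.
# Cut-offs follow common Jeffreys-style conventions; they are
# interpretive aids, not hard decision rules.
bf_category <- function(bf10) {
  cut(bf10,
      breaks = c(0, 1/10, 1/3, 1, 3, 10, Inf),
      labels = c("strong evidence for H0", "moderate evidence for H0",
                 "anecdotal evidence for H0", "anecdotal evidence for H1",
                 "moderate evidence for H1", "strong evidence for H1"),
      right = TRUE)
}

bf_category(c(0.05, 0.5, 25))
```

Applied to the vector above, the three values fall into the strong-for-H0, anecdotal-for-H0, and strong-for-H1 categories, respectively.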

Bayesian Graphical Modeling Software

Bayesian network models are powerful, but historically they’ve been out of reach for researchers without a background in programming or statistical computing.

The Bayesian Graphical Modeling Lab contributes to an ecosystem of open tools that make these methods more accessible. Our goal is to help researchers explore network structures using principled Bayesian inference — without having to write complex MCMC code.

We work to make Bayesian graphical modeling both transparent and approachable, whether you use R or point-and-click interfaces like JASP.


Core Inference Engines

There are three main computational packages used in Bayesian network modeling. Each makes different assumptions and offers different features. Here’s a comparison:

BDgraph
  • Data types: Continuous, Binary, Ordinal, Count†
  • Structure learning: ✅ Yes
  • Estimation: ❌ No
  • Priors: Bernoulli
  • Inference: Model-averaged
  • Highlights: Structure learning for mixed data

BGGM
  • Data types: Continuous, Binary, Ordinal†
  • Structure learning: ❌ No
  • Estimation: ✅ Yes
  • Priors: None
  • Inference: Single model
  • Highlights: Edge estimation and network comparison

bgms
  • Data types: Binary, Ordinal (Continuous in development)
  • Structure learning: ✅ Yes
  • Estimation: ✅ Yes
  • Priors: Bernoulli, Beta-Bernoulli, Stochastic Block
  • Inference: Model-averaged
  • Highlights: Structure learning + estimation, network comparison, clustering, missing data imputation

† BDgraph and BGGM use probit models for ordinal/binary data, not Markov random field models.


Why We Use easybgm and JASP

Rather than asking you to choose and learn each package separately, we use easybgm — a unified interface that wraps around the best parts of these engines.

  • Handles preprocessing, fitting, and plotting
  • Standardizes output across packages
  • Allows prior customization
  • Integrated into JASP for non-coders

Depending on your data and choices, easybgm automatically uses the appropriate back-end:
BDgraph, bgms, or (in some cases) BGGM.


Testing the Structure of the Network

We set up our analysis by loading a dataset containing responses from a sample of 2,000 individuals on 13 psychological indicators: three scale scores and ten ordinal items.

# install.packages("easybgm")
library(easybgm)
data = read.csv("https://raw.githubusercontent.com/Bayesian-Graphical-Modelling-Lab/BGM_Workshops/refs/heads/NetPsy_COMO_2025/Part%20II/exampledata.csv")[, -1]
head(data)

Step 1: Open JASP

Launch JASP and locate the main toolbar.

Step 2: Open the Data File

  1. Click the three horizontal bars in the top-left corner (☰).
  2. Select “Open” → “Computer” → “Browse…”
  3. Navigate to your .csv file and click Open.

After importing, you should see your dataset listed in the main workspace.

Step 3: Open the Bayesian Network Analysis Module

  1. Click the blue plus sign (+) at the top-right of the JASP window to open the modules menu.
  2. Scroll down to find the Network category.
  3. Under Network, click Bayesian to load the module.

Once loaded, you’ll see a new menu item: “Bayesian Network” under the Network tab in the top menu bar.

Fit the Model

Before we fit the model, it’s useful to explore the function we’ll use:

?easybgm

Here’s the full function signature to get oriented:

easybgm(
  data,
  type,
  package = NULL,
  not_cont = NULL,
  iter = 10000,
  save = FALSE,
  centrality = FALSE,
  progress = TRUE,
  posterior_method = "model-averaged",
  ...
)

The type argument tells easybgm what kind of data you’re working with — for example, "continuous", "binary", "ordinal", or "mixed". The package argument chooses the estimation engine.

To keep things fast and simple for this example, we’ll treat the data as if it were continuous (yes, even though it isn’t fully), and use BDgraph to fit the model.

Now let’s fit the model and summarize the results.

Fill in the blanks below to use the easybgm() function on the workshop dataset and inspect the summary(fit) output.

fit = easybgm(data = data,
               type = ____________,
               package = ____________,
               iter = 1e4,
               save = TRUE,
               centrality = TRUE,
               progress = TRUE)
summary(fit)               

✅ What is the total number of edges?
✅ Which edge(s) are excluded with high certainty?

This mirrors Task 1.1 in the assignment.

fit = easybgm(data = data,
               type = "continuous",
               package = "BDgraph",
               iter = 1e4,
               save = TRUE,
               centrality = TRUE,
               progress = TRUE)
summary(fit)               
 BAYESIAN ANALYSIS OF NETWORKS 
 Model type: ggm 
 Number of nodes: 13 
 Fitting Package: bdgraph 
--- 
 EDGE SPECIFIC OVERVIEW 
                  Relation Estimate Posterior Incl. Prob. Inclusion BF     Category
         depression-stress    0.599                  1.00          Inf     included
        depression-anxiety    0.333                  1.00          Inf     included
            stress-anxiety    0.206                  1.00          Inf     included
    depression-extraverted    0.000                  0.00        0.000     excluded
        stress-extraverted    0.000                  0.01        0.010     excluded
               ...             ...                    ...         ...         ...  
            
 Bayes Factors larger than 10 were considered sufficient evidence for the classification 
 Bayes factors were obtained using Bayesian model-averaging. 
 --- 
 AGGREGATED EDGE OVERVIEW 
 Number of edges with sufficient evidence for inclusion: 43 
 Number of edges with insufficient evidence: 8 
 Number of edges with sufficient evidence for exclusion: 27 
 Number of possible edges: 78 
 
 --- 
 STRUCTURE OVERVIEW 
 Number of visited structures: 17 
 Number of possible structures: 3.022315e+23 
 Posterior probability of most likely structure: 0.2856 
---

Step 1: Set up model

Step 2: Choose the summary options

Results: Bayes factor table

Evidence Thresholds

By default, easybgm uses an evidence threshold (Bayes factor) of 10:

  • BF > 10 → evidence for inclusion
  • BF < 1/10 → evidence for exclusion
  • In between → inconclusive

You can adjust this threshold to make the evidence criterion more or less strict.

Change the evidence threshold in summary(fit, evidence_thresh = ___) and see how the number of included and excluded edges changes.

✅ Which edges change their classification?
✅ What happens when you use a very lenient threshold (e.g., 1.5)?

This gives you practice for adjusting evidence sensitivity in Part III.

summary(fit, evidence_thresh = 3)
 BAYESIAN ANALYSIS OF NETWORKS
 Model type: ggm
 Number of nodes: 13
 Fitting Package: bdgraph
---
 EDGE SPECIFIC OVERVIEW
                  Relation Estimate Posterior Incl. Prob. Inclusion BF     Category
         depression-stress    0.599                  1.00          Inf     included
        depression-anxiety    0.333                  1.00          Inf     included
            stress-anxiety    0.206                  1.00          Inf     included
    depression-extraverted    0.000                  0.00        0.000     excluded
        stress-extraverted    0.000                  0.01        0.010     excluded
               ...             ...                    ...         ...         ...

 Bayes Factors larger than 3 were considered sufficient evidence for the classification
 Bayes factors were obtained using Bayesian model-averaging.
 ---
 AGGREGATED EDGE OVERVIEW
 Number of edges with sufficient evidence for inclusion: 44
 Number of edges with insufficient evidence: 3
 Number of edges with sufficient evidence for exclusion: 31
 Number of possible edges: 78

 ---
 STRUCTURE OVERVIEW
 Number of visited structures: 17
 Number of possible structures: 3.022315e+23
 Posterior probability of most likely structure: 0.2856
---

In JASP, there is no automatic classification table showing which edges are included, excluded, or inconclusive.

But don’t worry, you can determine this yourself by inspecting the inclusion Bayes factors in the output table. 💪
(And we believe in you!)

✅ A visual summary is available: the edge evidence plot, which we’ll explore in the next section.

Visualizing Results

The summary table is comprehensive but can become hard to interpret. Visualizations often help reveal patterns and structure more intuitively.

Start with the edge evidence plot:

plot_edgeevidence(fit, evidence_thresh = 10, legend = FALSE)

This plot categorizes edges by their Bayes factor:

  • Blue: strong evidence for inclusion
  • Red: strong evidence for exclusion
  • Gray: inconclusive

You can lower the threshold to see how evidence strength varies:

plot_edgeevidence(fit, evidence_thresh = 3, legend = FALSE)

If the network appears cluttered (a so-called “spaghetti plot”), use the split option to separate edges with any evidence for inclusion from those with evidence for exclusion.

Use plot_edgeevidence() with and without the split = TRUE option.

✅ Which edge categories become clearer when you split the plot?
✅ Do your visual conclusions match the summary table?

You’ll generate and interpret these plots in Task 1.2.

par(mfrow = c(1, 2))
plot_edgeevidence(fit, evidence_thresh = 10, split = TRUE, legend = FALSE)

The left panel shows edges with BF > 1; the right panel shows those with BF ≤ 1.

In JASP, you can plot the edge evidence using evidence categorization.

Step 1: Set up plot

Results: Evidence plot

Plotting the Network Parameters

Estimation: What is the strength and direction of the edge?

Once we’ve established which edges have credible evidence for inclusion, we can inspect the strength and direction of those connections. But which ones should we show?

In Bayesian model averaging, not all edges have convincing evidence; some may be weak or ambiguous. By default, easybgm visualizes the median probability model: the network consisting of all edges with a posterior inclusion probability > 0.5 (or equivalently, an inclusion Bayes factor > 1). You can adjust this threshold using the exc_prob argument:

plot_network(fit, exc_prob = 0.5)

Under the hood, easybgm uses qgraph, meaning you can pass any of its arguments to fully control the layout and style. For example, grouping nodes or setting colors.

Run plot_network() with and without custom arguments like groups or layout.

✅ Does changing exc_prob influence which edges are shown?
✅ Can you group variables by scale or type?

This corresponds to Task 2.1 in the assignment.

Names_data = colnames(data)
groups_data = c(rep("DASS", 3), rep("Personality", 10))

plot_network(fit, 
             exc_prob = 0.5, 
             layout = "spring",
             nodeNames = Names_data, 
             groups = groups_data,
             color = c("#fbb20a", "#E59866"),
             theme = "Fried", 
             dashed = TRUE)

Step 1: Set up plot

Result: Network plot

Parameter Uncertainty: HDI Intervals

The network plot shows posterior means, but this hides uncertainty. A useful complement is the posterior highest density interval (HDI), which shows how stable the estimates are.

This plot highlights which posterior distributions are tightly concentrated, and which are uncertain, helping you interpret results more cautiously.

Use plot_parameterHDI() to inspect parameter uncertainty.

✅ Which edges have narrow (precise) intervals?
✅ Are any edge estimates near zero?

Compare the plots for edge selection vs. full models in Task 2.3.

plot_parameterHDI(fit)

JASP does not (yet) include a Highest Density Interval (HDI) plot for edge weights.

If you want to assess uncertainty in edge parameters, you’ll need to use R and the plot_parameterHDI() function from easybgm.

We expect this feature to be added in a future JASP release.

Centrality

Centrality measures attempt to quantify how “important” a node is in the network. For example, how often it sits on the shortest path between other nodes (betweenness), or how connected it is overall (degree).
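To make these definitions concrete, here is a small base-R sketch that computes degree and strength for a toy weighted network (an illustration with made-up weights, not easybgm output):

```r
# Toy symmetric weight matrix for a 4-node network (e.g., partial correlations)
W <- matrix(c(0.0,  0.3,  0.0,  0.2,
              0.3,  0.0,  0.4,  0.0,
              0.0,  0.4,  0.0, -0.1,
              0.2,  0.0, -0.1,  0.0),
            nrow = 4, byrow = TRUE,
            dimnames = list(paste0("n", 1:4), paste0("n", 1:4)))

# Degree: number of non-zero connections per node
degree <- rowSums(W != 0)

# Strength: sum of absolute edge weights per node
strength <- rowSums(abs(W))

cbind(degree, strength)
```

In a Bayesian analysis, these summaries are computed for every posterior sample of the network, which is what yields the credible intervals mentioned above.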

While popular, centrality metrics in psychological networks have been criticized for their instability and unclear interpretation, especially in small or highly interconnected networks.

The nice thing about the Bayesian approach is that it provides credible intervals for these centrality indices, giving you a sense of how uncertain these summaries are.

plot_centrality(fit)

⚠️ Caution: Interpret centrality values as descriptive summaries — not causal influence or intervention targets. Use the credible intervals to judge whether centrality differences are meaningful.

Understanding Priors in Bayesian Network Models

Priors allow us to encode assumptions about the network before seeing the data. These assumptions affect:

  • Structure: Which edges we expect to be present
  • Parameters: How strong or weak we expect connections to be

Some prior choices in easybgm to investigate:

  • Structure prior: inclusion_probability = 0.1 (sparse network, few edges)
  • Structure prior: edge_prior = "Beta-Bernoulli" (allows data-driven sparsity tuning)
  • Parameter prior: interaction_scale = 0.25 (shrinks estimates toward zero; informative)
  • Parameter prior: interaction_scale = 2.5 (allows large edge effects; diffuse, the default)
Tip

Try multiple priors and see how your conclusions change. This is a key part of Bayesian modeling.

Prior Sensitivity Check

Let’s explore how different structure priors influence the analysis using two assumptions: one sparse and one dense.

#install.packages("bgms")
library("bgms")

Estimate two networks on the first five ordinal variables (i.e., data[, 4:8] in the dataset) using the bgms package:

  • One with a sparse prior: inclusion_probability = 0.1
  • One with a dense prior: inclusion_probability = 0.7

Then use summary() and plot_edgeevidence() to compare:

✅ Which edges are included or excluded?
✅ Does the posterior structure probability change?

# Sparse prior model
fit_sparse <- easybgm(
  data = data[, 4:8],
  type = "ordinal",
  package = "bgms",
  iter = 1e4,
  inclusion_probability = 0.1,
  save = TRUE
)

# Dense prior model
fit_dense <- easybgm(
  data = data[, 4:8],
  type = "ordinal",
  package = "bgms",
  iter = 1e4,
  inclusion_probability = 0.7,
  save = TRUE
)
summary(fit_sparse)
--- 
AGGREGATED EDGE OVERVIEW 
Number of edges with sufficient evidence for inclusion: 6 
Number of edges with insufficient evidence: 1 
Number of edges with sufficient evidence for exclusion: 3 
Number of possible edges: 10 
 
--- 
STRUCTURE OVERVIEW 
Number of visited structures: 4 
Number of possible structures: 1024 
Posterior probability of most likely structure: 0.986 
---
summary(fit_dense)
--- 
AGGREGATED EDGE OVERVIEW 
Number of edges with sufficient evidence for inclusion: 6 
Number of edges with insufficient evidence: 0 
Number of edges with sufficient evidence for exclusion: 4 
Number of possible edges: 10 
 
--- 
STRUCTURE OVERVIEW 
Number of visited structures: 9 
Number of possible structures: 1024 
Posterior probability of most likely structure: 0.892
--- 
# Compare edge plots
par(mfrow = c(1, 2))
plot_edgeevidence(fit_sparse, main = "Sparse prior")
plot_edgeevidence(fit_dense, main = "Dense prior")

Step 1: Set up priors

Result: Sparse network table

Result: Sparse network plot

Result: Dense network table

Result: Dense network plot

What Should You Do in Practice?

  • Start with defaults unless you have specific expectations.
  • If theory or data suggest otherwise, adapt the prior accordingly.
  • Always run at least one sensitivity check.
  • Report your choices transparently.

You’ll try this yourself in Part III – Optional Part 4: Prior Robustness.

What’s Next: Moving to Part III

You’ve now explored the full Bayesian workflow:

✅ Testing for edge inclusion using Bayes factors
✅ Estimating edge parameters and visualizing uncertainty
✅ Exploring the influence of priors on network conclusions
✅ Doing all of this in both R and JASP

In Part III, you’ll put these skills into practice:

  • Analyzing new datasets
  • Interpreting edge evidence
  • Running prior sensitivity checks
  • Communicating your results clearly

Whether you work in code or use the JASP interface, everything you need is already in your hands!

Advanced Analysis I: Network Comparison (Optional)

In earlier sections, we tested whether an edge was present or absent within a single group.

Now we shift to a new question: does a network relation differ between groups?

We test whether an edge is different in strength across two (or more) independent samples. This is a Bayesian parameter comparison, where:

  • \(\mathcal{H}_0\): The edge weight is the same across groups (invariance)
  • \(\mathcal{H}_1\): The edge weight differs across groups

We model this difference as: \[ \sigma_{\text{group}} = \begin{cases} \sigma - \frac{1}{2}\delta & \text{for Group 1 } (\texttt{x})\\ \sigma + \frac{1}{2}\delta & \text{for Group 2 } (\texttt{y}) \end{cases} \] This means that \(\mathcal{H}_0: \delta = 0\) and \(\mathcal{H}_1: \delta \neq 0\).
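A minimal numeric sketch of this parameterization (toy values for illustration, not estimates from the workshop data):

```r
# Toy values: overall edge weight sigma and group difference delta
sigma <- 0.30
delta <- 0.10

# Group-specific edge weights under the difference model
group1 <- sigma - delta / 2  # Group 1 (x)
group2 <- sigma + delta / 2  # Group 2 (y)

c(group1 = group1, group2 = group2, difference = group2 - group1)
# group1 = 0.25, group2 = 0.35, difference = 0.10
```

The two group weights are centered on sigma, so delta captures the difference directly: delta = 0 recovers the invariance hypothesis.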

This is implemented in the bgms package via bgmCompare().

Note

You can compare two or more groups. The GitHub version of bgms supports flexible group comparisons (and also has improved output formatting):

https://github.com/Bayesian-Graphical-Modelling-Lab/bgms

To install the GitHub version (if needed):

install.packages("remotes")  # if not already installed
remotes::install_github("Bayesian-Graphical-Modelling-Lab/bgms")

Note: This version is only needed if you’re using features not yet available on CRAN.

Quick Example: Comparing French and English Samples

# Load example data
load("Boredom.rda")
head(Boredom)

french = Boredom[Boredom[,1] == "fr", -1]
english = Boredom[Boredom[,1] != "fr", -1]

# Run network comparison
fit_compare <- bgmCompare(x = french,
                          y = english,
                          iter = 1e4)  # You may need higher values in real work

Extract and interpret results:

The CRAN version has slightly outdated output, so you’ll need to manually extract and inspect the results yourself:

# Extract posterior inclusion probabilities and differences
pip <- fit_compare$indicator[lower.tri(fit_compare$indicator)]
bf <- pip / (1 - pip)
diff_parameters <- fit_compare$pairwise_difference[lower.tri(fit_compare$pairwise_difference)]

# Extract node names
nodes <- colnames(fit_compare$indicator)

# Generate edge labels from lower triangle positions
edge_labels <- c()
for (i in 2:length(nodes)) {
  for (j in 1:(i - 1)) {
    edge_labels <- c(edge_labels, paste(nodes[i], "-", nodes[j]))
  }
}

# Combine into a labeled data frame
comparison_results <- data.frame(
  edge = edge_labels,
  BF = bf,
  difference = diff_parameters
)

# View the result
print(comparison_results, digits = 2)
                             edge          BF    difference
1          loose_ends - entertain  0.16211505 -8.083581e-03
2         loose_ends - repetitive  0.03263114 -8.113997e-04
3        loose_ends - stimulation  2.37268128  5.845597e-02
4          loose_ends - motivated  0.15313653 -7.158634e-03
5      loose_ends - keep_interest  0.09206072 -5.082763e-03
6         loose_ends - sit_around  0.14744693 -8.158050e-03
7     loose_ends - half_dead_dull  1.75709953  5.134223e-02
8          entertain - repetitive  0.03060909 -7.566605e-04
9         entertain - stimulation  0.25517761  1.223448e-02
10          entertain - motivated  0.02259945  2.042181e-04
11      entertain - keep_interest  0.31457868 -1.860028e-02
12         entertain - sit_around  0.84060372  3.573346e-02
13     entertain - half_dead_dull 55.81818182  1.013465e-01
14       repetitive - stimulation  0.09817703  3.676042e-03
15         repetitive - motivated  0.01957586 -1.843417e-04
16     repetitive - keep_interest  0.27730234 -1.366951e-02
17        repetitive - sit_around  6.35294118  7.410273e-02
18    repetitive - half_dead_dull  0.21639703 -9.106776e-03
19        stimulation - motivated  0.07353731  2.930621e-03
20    stimulation - keep_interest  0.01967982 -1.264931e-05
21       stimulation - sit_around         Inf -1.381480e-01
22   stimulation - half_dead_dull  0.02679947  6.287546e-04
23      motivated - keep_interest  0.02280863  1.447331e-05
24         motivated - sit_around  0.02176356  4.245007e-04
25     motivated - half_dead_dull  0.03939299 -1.088784e-03
26     keep_interest - sit_around  0.38007176  2.124502e-02
27 keep_interest - half_dead_dull  0.03039670 -6.228029e-04
28    sit_around - half_dead_dull 34.33568905  1.094910e-01

How to interpret?

  • \(BF_{10} > 10\) → strong evidence that an edge differs between groups
  • \(BF_{10} < 1/10\) → strong evidence that the edge is invariant
  • Everything in between → inconclusive difference

Just like earlier, we distinguish evidence for difference vs absence of evidence for difference.
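Using the same thresholds, you could classify each edge programmatically. Here is a sketch with a mock data frame shaped like the comparison_results table above (the values are invented for illustration):

```r
# Mock results in the same shape as comparison_results
res <- data.frame(
  edge       = c("a - b", "a - c", "b - c"),
  BF         = c(55.8, 0.03, 2.4),
  difference = c(0.10, -0.001, 0.05)
)

# Classify the evidence for a group difference per edge:
# BF > 10 -> difference, BF < 1/10 -> invariant, otherwise inconclusive
res$category <- ifelse(res$BF > 10, "difference",
                ifelse(res$BF < 1 / 10, "invariant", "inconclusive"))
res
```

Running the same classification on the real comparison_results object makes it easy to see at a glance which edges differ, which are invariant, and where the data are simply uninformative.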

Visualize differences

We can also visualize the median probability network to inspect how the edges differ between the groups:

library(qgraph)
qgraph(fit_compare$pairwise_difference * (fit_compare$indicator > 0.5), 
       posCol = palette("Okabe-Ito")[4], 
       negCol = palette("Okabe-Ito")[7])

  • Positive edges (greenish) indicate that the edge weight is larger in the English version/population and smaller in the French version/population.
  • Negative edges (orange) indicate that the edge weight is larger in the French version/population and smaller in the English version/population.

These are the edges with a posterior inclusion probability above 0.5, i.e., the median probability model.


Advanced Analysis II: Network Clustering (Optional)

Another advanced feature available in bgms is network clustering, identifying groups of nodes that tend to co-occur or form modular substructures.

This is made possible by using a Stochastic Block Model (SBM) prior, which encourages clustered edge structures.

While we won’t go into detail here, an excellent hands-on walkthrough is available from Nikola Sekulovski:

https://www.nikolasekulovski.com/blog/post2/

It covers:

  • How to use the bgms SBM prior
  • How to visualize the resulting block structure
  • How to interpret the clusters in psychological networks

You can explore this in your own data!